
# Human Feedback Optimization

## Ziya Writing LLaMa 13B V1
Developer: IDEA-CCNL · License: GPL-3.0
Ziya Writing Large Model V1 is a 13-billion-parameter instruction-tuned model based on LLaMA that specializes in writing tasks such as official documents, speeches, letters, and creative copywriting.
Tags: Large Language Model, Transformers, Supports Multiple Languages
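
For reference, a minimal generation sketch using the Hugging Face Transformers library is shown below. The checkpoint id `IDEA-CCNL/Ziya-Writing-LLaMa-13B-v1` and the `<human>:`/`<bot>:` prompt format are assumptions based on common Ziya conventions, not details confirmed on this page.

```python
# Hedged usage sketch: loading a Ziya writing model with Transformers.
# The repo id and the prompt format below are assumptions, not taken from this page.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "IDEA-CCNL/Ziya-Writing-LLaMa-13B-v1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Instruction-style prompt for a writing task (format is an assumption).
prompt = "<human>:Write a short formal invitation letter for a product launch.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```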
## DialogRPT-depth
Developer: microsoft
DialogRPT-depth is a dialogue response ranking model developed by Microsoft Research that predicts how likely a dialogue response is to spark a long discussion thread.
Tags: Dialogue System, Transformers
## DialogRPT-updown
Developer: microsoft
DialogRPT-updown is a dialogue response ranking model trained on human feedback data that predicts how likely a dialogue response is to receive likes.
Tags: Dialogue System, Transformers
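
Both DialogRPT checkpoints are distributed as Transformers sequence-classification models, so they can be used to score and rank candidate replies. The sketch below is a minimal example assuming the Hugging Face repo id `microsoft/DialogRPT-updown` and the `<|endoftext|>` context/response separator used in the DialogRPT documentation; treat both as assumptions rather than details stated on this page.

```python
# Hedged sketch: ranking candidate replies with DialogRPT-updown.
# The repo id and the "<|endoftext|>" context/response separator are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "microsoft/DialogRPT-updown"  # swap in DialogRPT-depth to rank by thread depth
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

def score(context: str, response: str) -> float:
    """Return a scalar score for how likely `response` is to be liked."""
    text = context + "<|endoftext|>" + response
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # single regression-style logit
    return torch.sigmoid(logits[0, 0]).item()

context = "I just adopted a puppy."
for reply in ["Congrats! What breed is it?", "ok"]:
    print(reply, "->", round(score(context, reply), 3))
```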